
    Efficient Collective Action for Tackling Time-Critical Cybersecurity Threats

    The shrinking latency between the discovery of vulnerabilities and the build-up and dissemination of cyber-attacks has put significant pressure on cybersecurity professionals. In response, security researchers have increasingly resorted to collective action to reduce the time needed to characterize and tame outstanding threats. Here, we investigate how joining and contribution dynamics on MISP, an open-source threat intelligence sharing platform, influence the time needed to collectively complete threat descriptions. We find that performance, defined as the capacity to quickly characterize a threat event, is influenced (i) negatively by the event's own complexity, (ii) positively by collective action, and (iii) positively by learning, information integration, and modularity. Our results inform how collective action can be organized at scale and in a modular way to overcome a large number of time-critical tasks, such as cybersecurity threats.

    Comment: 23 pages, 3 figures. Presented at the 21st Workshop on the Economics of Information Security (WEIS), 2022, Tulsa, US.

    Beyond S-curves: Recurrent Neural Networks for Technology Forecasting

    Because of the considerable heterogeneity and complexity of the technological landscape, building accurate forecasting models is a challenging endeavor. Owing to their high prevalence in many complex systems, S-curves are a popular forecasting approach in previous work. However, their forecasting performance has not been directly compared to other technology forecasting approaches. Additionally, recent developments in time series forecasting that claim to improve forecasting accuracy are yet to be applied to technological development data. This work addresses both research gaps by comparing the forecasting performance of S-curves to a baseline and by developing an autoencoder approach that employs recent advances in machine learning and time series forecasting. S-curve forecasts largely exhibit a mean absolute percentage error (MAPE) comparable to a simple ARIMA baseline. However, for a minority of emerging technologies, the MAPE increases by two orders of magnitude. Our autoencoder approach improves the MAPE by 13.5% on average over the second-best result. It forecasts established technologies with the same accuracy as the other approaches, and it is especially strong at forecasting emerging technologies, with a mean MAPE 18% lower than the next best result. Our results imply that a simple ARIMA model is preferable to the S-curve for technology forecasting. Practitioners looking for more accurate forecasts should opt for the presented autoencoder approach.

    Comment: 16 pages, 8 figures.
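    The S-curve baseline compared above can be sketched as a logistic fit scored with MAPE. The function names and the synthetic data below are illustrative assumptions, not the paper's implementation or data:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """S-curve: cumulative growth with capacity K, growth rate r, midpoint t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Hypothetical cumulative yearly indicator for one technology (synthetic).
t = np.arange(20)
y = logistic(t, K=1000, r=0.5, t0=10) + np.random.default_rng(0).normal(0, 5, 20)

# Fit the S-curve on the first 15 points, evaluate on the 5 held-out points.
params, _ = curve_fit(logistic, t[:15], y[:15],
                      p0=[y.max(), 0.1, t[:15].mean()], maxfev=10000)
forecast = logistic(t[15:], *params)
print(f"S-curve MAPE on hold-out: {mape(y[15:], forecast):.1f}%")
```

    An ARIMA baseline would replace the logistic fit with, e.g., `statsmodels` `ARIMA(y[:15], order=(1, 1, 0))`, keeping the same hold-out MAPE comparison.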

    Identifying Emerging Technologies and Leading Companies using Network Dynamics of Patent Clusters: a Cybersecurity Case Study

    Strategic decisions rely heavily on non-scientific instrumentation to forecast emerging technologies and leading companies. Instead, we build a fast quantitative system with a small computational footprint to discover the most important technologies and companies in a given field, using generalisable methods applicable to any industry. Using patent data from the US Patent and Trademark Office, we first assign a value to each patent with automated machine learning tools. We then apply network science to track the interaction and evolution of companies and clusters of patents (i.e. technologies), creating rankings for both sets that highlight important or emerging network nodes through five network centrality indices. Finally, we illustrate our system with a case study of the cybersecurity industry. Our results produce useful insights, for instance by highlighting (i) emerging technologies with a growing mean patent value and cluster size, (ii) the most influential companies in the field, and (iii) attractive startups with few but impactful patents. Complementary analysis also provides evidence of decreasing marginal returns on research and development in larger companies in the cybersecurity industry.

    Comment: 24 pages, 8 figures.
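    Ranking nodes by several centrality indices can be illustrated with `networkx` on a toy network. The company names, edge weights, choice of five indices, and the average-rank composite below are assumptions for illustration, not the paper's data or exact method:

```python
import networkx as nx

# Hypothetical company network: edges weighted by shared patent clusters.
G = nx.Graph()
G.add_weighted_edges_from([
    ("CompanyA", "CompanyB", 5), ("CompanyA", "CompanyC", 2),
    ("CompanyB", "CompanyD", 3), ("CompanyC", "CompanyD", 1),
    ("CompanyD", "CompanyE", 4),
])

# Five centrality indices one might combine into a ranking.
indices = {
    "degree": nx.degree_centrality(G),
    "betweenness": nx.betweenness_centrality(G, weight="weight"),
    "eigenvector": nx.eigenvector_centrality(G, weight="weight"),
    "closeness": nx.closeness_centrality(G),
    "pagerank": nx.pagerank(G, weight="weight"),
}

# Average rank across indices as a simple composite score (lower = better).
nodes = list(G)
composite = {n: 0.0 for n in nodes}
for scores in indices.values():
    ranked = sorted(nodes, key=scores.get, reverse=True)
    for rank, node in enumerate(ranked):
        composite[node] += rank / len(indices)

for node in sorted(composite, key=composite.get):
    print(node, round(composite[node], 2))
```

    The same pattern applies to the cluster (technology) network; only the node set and edge definition change.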

    Scientometric and Wikipedia pageview analysis

    Any trend assessment concerning data protection and encryption technologies constitutes a challenging task for various reasons. The swift development of security technologies brings a myriad of novel protocols, tools, and procedures whose technological readiness levels also evolve rapidly. We used a benchmarking development indicator, the attention paid by different communities, to perform the analysis: scientific attention was measured through a scientometric analysis of the production of scientific works, and public attention through the Wikipedia pageviews of these technologies. This analysis provides valuable insights into the development of data protection and encryption technologies and their impact on the security landscape.

    Knowledge absorption for cyber-security: The role of human beliefs

    We investigate how human beliefs are associated with the absorption of the specialist knowledge required to produce cyber-security. We ground our theorizing in the knowledge-based view of the firm and transaction-cost economics. We test our hypotheses with a sample of 262 members of an information-sharing and analysis center who share sensitive information related to cyber-security. Our findings suggest that resource belief, usefulness belief, and reciprocity belief are all positively associated with knowledge absorption, whereas reward belief is not. The implications of these findings for practitioners and future research are discussed.

    LLM-based entity extraction is not for cybersecurity

    The cybersecurity landscape evolves rapidly and poses threats to organizations. To enhance resilience, one needs to track the latest developments and trends in the domain. For this purpose, we use large language models (LLMs) to extract relevant knowledge entities from cybersecurity-related texts. We use a subset of arXiv preprints on cybersecurity as our data and compare different LLMs in terms of entity recognition (ER) and relevance. The results suggest that LLMs do not produce good knowledge entities that reflect the cybersecurity context.
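    The abstract does not state how entity quality was scored; a common way to evaluate entity recognition against a gold standard is set-based precision, recall, and F1. The entity strings below are hypothetical examples, not the paper's data:

```python
def entity_prf(gold: set[str], predicted: set[str]) -> tuple[float, float, float]:
    """Precision, recall, F1 for extracted entity sets (exact string match)."""
    tp = len(gold & predicted)                    # true positives
    p = tp / len(predicted) if predicted else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = {"ransomware", "phishing", "MITRE ATT&CK"}
pred = {"ransomware", "phishing", "email"}        # hypothetical LLM output
print(entity_prf(gold, pred))                     # precision and recall are both 2/3 here
```

    Exact match is a strict criterion; relevance judgments like those in the abstract would need a separate, typically manual, annotation step.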

    Forecasting labor needs for digitalization: a bi-partite graph machine learning approach

    We use a unique database of digital and cybersecurity hires from Swiss organizations and develop a method based on a temporal bi-partite network, which combines local and global indices through a support vector machine. We predict the appearance and disappearance of job openings at horizons of one to six months. We show that global indices yield the highest predictive power, although the local network does contribute to long-term forecasts. At the one-month horizon, the area under the curve and the average precision are 0.984 and 0.905, respectively; at the six-month horizon, they reach 0.864 and 0.543. Our study highlights the link between the skilled workforce and the digital revolution, and the policy implications regarding intellectual property and technology forecasting.
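    A rough sketch of the prediction setup, assuming scikit-learn and fully synthetic features standing in for the paper's local and global network indices (the scores it prints will not match the reported 0.984/0.905):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, average_precision_score

# Synthetic features for (organization, skill) pairs: four illustrative
# network indices per pair, with a planted linear signal plus noise.
rng = np.random.default_rng(42)
n = 500
X = rng.normal(size=(n, 4))
y = (X @ np.array([1.5, -0.5, 1.0, 0.2]) + rng.normal(0, 1, n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# SVM with probability outputs so AUC / average precision can be computed.
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X_tr, y_tr)
p = clf.predict_proba(X_te)[:, 1]

print(f"AUC: {roc_auc_score(y_te, p):.3f}")
print(f"Average precision: {average_precision_score(y_te, p):.3f}")
```

    In the temporal setting described above, the train/test split would instead be chronological, with features built from the network before each forecast horizon.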